Statement of Purpose

Authors

  • Peter P. Budetti
  • Anna Kondratas
Abstract

This journal issue comprises reports concerning program evaluations of key national home visitation models. No single evaluation can answer all the questions of interest about a program, nor is any evaluation perfect, which means that readers must carefully weigh the intended purpose of the evaluation and the evaluation's strengths and weaknesses before deciding what conclusions can credibly be drawn from its results. This article begins with a discussion of the role of evaluation both in improving programs and in determining program effects. The choices required to craft a strong and methodologically rigorous evaluation are described: what outcomes to measure and how; what methods to use in designing the evaluation and building a comparison group; how many participants to enroll; and how to devise a strong plan for data analysis involving subgroups of the enrolled families. The article then discusses additional factors policymakers and practitioners should consider when interpreting the results of home visiting evaluations: attrition, the policy and functional importance of the outcomes, and the likely generalizability of the results to other communities or other populations. The evaluations that appear in this journal issue are used as examples throughout the article, and the measures that were used in those evaluations are summarized. The evaluations included in this journal issue have both strengths and weaknesses but are probably among the better evaluations in the home visiting field.

Deanna S. Gomby, Ph.D., is deputy director of Children, Families, and Communities at The David and Lucile Packard Foundation.

The Future of Children, HOME VISITING: RECENT PROGRAM EVALUATIONS, Vol. 9, No. 1, Spring/Summer 1999

The Purposes of Program Evaluation

Evaluations of human-service programs are typically designed to answer one or more of the following questions: (1) What services did the program provide? (2) Who received the services? (3) Did the services produce the anticipated outcomes? If the primary purpose of the evaluation is to help program staff hone a new program, then answering the first two questions may be enough. If, on the other hand, the purpose of the evaluation is to persuade funders to continue or expand support or to sponsor replication in other communities, then the question about program outcomes must be answered. These questions are not easily separable, of course, and most of the evaluations in this journal issue include information designed to answer all three questions. As the articles demonstrate, however, answering them can be difficult. This article describes some of the reasons why these questions are important for policymakers and practitioners, why they are hard to answer, and how the evaluations in this journal chose to address them.

What Services Did the Program Provide?

Answering this question can provide information that can be used both to improve a program and to interpret the results of evaluations focused on program effectiveness. For example, if an evaluation suggests that services are not being delivered as intended, then program administrators may want to institute quality-improvement measures to improve implementation, or they may decide that the model or curriculum should be modified because practice in the field is suggesting a better approach. Program evaluators may use implementation information to make sure that the evaluation is a fair test of the intended intervention and not an evaluation of a poorly implemented shadow of that intervention.
In addition, evaluators can use implementation information to explain the results that their evaluations of outcomes eventually produce.

Intensity of Services

Several of the reports in this journal issue suggest that families received fewer home visits than were intended by their models—in some cases, families averaged about 40% to 60% of the number of visits intended in the models (among those reporting this information were Parents as Teachers [PAT] and the Nurse Home Visitation Program). For many families, therefore, the intervention that was tested was not as intensive an intervention as the model developers planned. That information by itself cannot determine the next steps, but it can alert program planners to some possible alternatives. For example, it might suggest that visitors need additional training in how to contact hard-to-reach families. Alternatively, perhaps the planned intensity level of services is simply unrealistic. Or perhaps the model needs to be modified to make it more interesting, so that parents will seek it out more readily. Whether the results are used to improve existing practice or to alter the model, understanding these variations in "dosage" can have implications for understanding the eventual outcomes of any program. Some of the evaluations (for example, see the article by Wagner and Clayton in this journal issue) suggest that families that receive higher-intensity services benefit more than those that receive fewer visits; if this is correct, then knowing that the tested intervention is not delivering as many visits as planned may mean that the program will be less likely to produce the intended benefits.

Content of the Visits

The content of the home visits may also stray from the intended curriculum. Most of the home visitation programs described in this journal issue have core curricula, but visitors may not always be able to deliver the lesson plans. A mother may be concerned about a sick infant, or may have had a very rough night with an abusing spouse, and she may want to talk about those issues rather than about the presumed topic for the day. The home visitor is likely to set aside the curriculum to address the mother's more pressing concerns. That ability to respond to parental concerns immediately and with sensitivity is one of the hallmarks of home visitation programs, and is widely seen as one of their strengths. Nevertheless, if such deviation occurs on a regular basis, or if individual home visitors consistently vary their programs as a reflection of their own backgrounds and experiences, then the service the home visitors provide is not the same as what program designers originally proposed. The evaluations in this journal issue do not directly report on this aspect of program implementation, although Baker, Piotrkowski, and Brooks-Gunn suggest in their article about the Home Instruction Program for Preschool Youngsters (HIPPY) that variation in delivery of the intended curriculum does occur.
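Both kinds of implementation drift described above (fewer visits than planned, and visits that depart from the curriculum) can be quantified once per-family visit records exist. The following is a minimal sketch in Python; the visit counts and field names are entirely hypothetical and are meant only to illustrate the arithmetic, not any particular program's data system.

```python
# Hypothetical visit records: visits each family received versus the
# number the model intended, plus how many visits covered the planned
# lesson. All numbers are invented for illustration.
families = [
    {"id": "A", "visits_received": 12, "visits_intended": 24, "on_curriculum": 8},
    {"id": "B", "visits_received": 16, "visits_intended": 24, "on_curriculum": 13},
    {"id": "C", "visits_received": 10, "visits_intended": 24, "on_curriculum": 4},
]

for f in families:
    dosage = f["visits_received"] / f["visits_intended"]    # share of intended visits delivered
    fidelity = f["on_curriculum"] / f["visits_received"]    # share of delivered visits on curriculum
    print(f"family {f['id']}: dosage {dosage:.0%}, fidelity {fidelity:.0%}")

# Program-level dosage for this invented caseload works out to roughly
# half the intended visits, echoing the 40% to 60% range reported above.
mean_dosage = sum(f["visits_received"] for f in families) / sum(
    f["visits_intended"] for f in families
)
print(f"overall dosage: {mean_dosage:.0%}")
```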
Usually, evaluators try to capture the content of the services through (1) interviews with home visitors conducted some time after the visits occur (see the article by Baker, Piotrkowski, and Brooks-Gunn in this journal issue), (2) reports by home visitors summarizing what occurs during the lessons, or (3) results of analyses of videotapes of actual home visits (see the article by Wagner and Clayton in this journal issue). If evaluations reveal that differences in program content have occurred, program planners may want to change the model to incorporate the changes the home visitors are making. Or, if they believe the differences reflect poor training, they may institute in-service training or closer supervision to encourage more faithful implementation. From a methodological point of view, however, averaging results across all the home visitors in a particular program, with their own styles and session content, may disguise the differences present, and so mask program effectiveness. Because such individualization of services is inherent in home visiting, it is quite possible that this has occurred to some extent in all the evaluations reported in this journal issue. Only a careful analysis of information concerning what actually occurred during home visits would allow this to be disentangled, and such information is not available for most programs.

Ancillary Services

The services that are provided in the home are often only a part of the total intervention. Some programs (for example, HIPPY and PAT) offer both home visits and parent group meetings. The HIPPY evaluation reported that some types of families were more likely to attend the group meetings than to persevere with the home visits. If outcomes such as children's development differed among these families, then knowing who actually made use of the offered services might help explain those results. Most programs also seek to connect families with a range of services in the community, including health and child care services for the children, and employment, housing, transportation, and drug-treatment services for the parents. The services families are referred to and receive essentially become part of the program for those families, and may become a critical element in the observed success or failure of the home visiting program. Evaluators therefore sometimes track the community services families receive (for example, see the article by St. Pierre and Layzer on the Comprehensive Child Development Program [CCDP]), but this is an expensive task that requires the cooperation of participating families and community agencies to either complete interviews or approve the release of family records. For these reasons, few evaluations, including those in this journal issue, capture those ancillary services in great detail. Determining which services families receive may have implications for program quality: If program administrators believe that the strength of their model depends upon linkages of families with other community institutions, but those linkages never occur, then the administrators may seek other strategies to forge those connections.
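Both the visitor-to-visitor variation described earlier and the family-to-family variation in service uptake pose the same statistical problem: averaging over heterogeneous experiences dilutes the estimated effect. The toy simulation below illustrates this with invented effect sizes; none of the numbers come from the evaluations in this issue.

```python
import random
import statistics

random.seed(1)

def outcome(benefit):
    # Standardized outcome score: true benefit plus unit-variance noise.
    return random.gauss(benefit, 1.0)

# Half of the treated families are served faithfully (true benefit of
# 0.5 standard deviations); the other half receive a diluted version
# (no benefit). The control group receives no services.
faithful = [outcome(0.5) for _ in range(100)]
diluted = [outcome(0.0) for _ in range(100)]
control = [outcome(0.0) for _ in range(200)]

faithful_effect = statistics.mean(faithful) - statistics.mean(control)
pooled_effect = statistics.mean(faithful + diluted) - statistics.mean(control)

print(f"effect among faithfully served families: {faithful_effect:+.2f} SD")
print(f"pooled effect across all served families: {pooled_effect:+.2f} SD")
# The pooled estimate is roughly half the faithful-delivery effect,
# making it much harder to distinguish from zero.
```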
From a methodological point of view, the variability in the extent to which families seek out and access services may decrease the likelihood that an evaluation will detect a difference in overall outcomes for all families. In addition, a model that relies heavily on other community services may suggest that the model's success in one community will not necessarily translate to another community.

Who Received the Services?

Describing the people who participated in a program—both those designated as eligible for the program and those who made use of available services—is important for improving the program, for interpreting the results of evaluations, and for making judgments about who should receive services in the future. Typically, information about program participants begins with information about the eligible population: all mothers with newborns in the community, or just teens, first-time mothers, or low-income families, and so on. The studies in this journal issue essentially all focused on low-income families, although some programs were offered fairly universally to everyone within a geographic catchment area (for example, PAT), or screened everyone within that area for services and then offered services to the most needy families (for example, Healthy Families America [HFA] and Hawaii Healthy Start). Although not all of the programs reported information about this issue, those that did (for example, CCDP, Hawaii Healthy Start, and the Nurse Home Visitation Program) suggested that perhaps 10% to 25% of families who are invited to enroll in services refuse to do so. The next step is to describe who actually received services. The evaluations of HIPPY, Hawaii Healthy Start, and HFA suggest that some types of families are more likely than others either to use some aspects of the programs or to continue participation. If understood, these differences can suggest program improvements. For example, the article by Baker, Piotrkowski, and Brooks-Gunn in this journal issue suggests that the HIPPY model should be extended so that programs offer services to overcome barriers that prevent families from taking advantage of existing services. Policymakers can use this information to judge whether a program, when extended to a new community, is likely to have the same effects. And evaluators can use information about program enrollment and participation to explain changes that the program created (or, perhaps, failed to create) among different groups of participants.

Did the Services Produce the Anticipated Outcomes?

Answering the first two questions—what services were provided and who received them—is a key step in the evaluation of any service program but is not sufficient to reach conclusions about a home visitation program's effects on children and families. For that, another type of evaluation is required—one that seeks to demonstrate that hoped-for changes in program participants, their families, or their communities have occurred, and that the changes were caused by the program and not by something else. The extent to which the causal connection can be made is largely determined by several key decisions about the design of the evaluation.
These include which outcomes will be assessed and how they will be measured, whether and how a comparison group will be constructed, and how many people will be assessed.

Choosing Outcomes

Ideally, program and evaluation planners should select outcomes carefully based on their implicit and explicit theories of how the program services are supposed to create change, but, in fact, outcomes are usually selected for measurement through a combination of theory and pragmatism. For example, a home visitation program that is supposed to prevent child abuse and neglect may be hypothesized to work in one or more ways: (1) by increasing parental knowledge or altering parental expectations about child development, (2) by changing parental attitudes toward child rearing, (3) by modifying the parent-child interaction, and/or (4) by increasing surveillance that either leads to earlier detection of potential problems or discourages the expression of those problems. Although a change in rates of abuse and neglect is the ultimate goal for the program, accurately measuring such change is difficult for a variety of reasons, including the general reluctance of parents and others to report abuse and neglect and the wide variability in child protective services agencies' responses to those reports. (See the articles in this journal issue by Duggan and colleagues, by Olds and colleagues, and by Daro and Harding for discussions of this topic.) In other words, assessing changes in child abuse and neglect rates alone may make the program look less effective than it really is and may also mean that evaluators miss changes in intermediate outcomes, such as parent-child interaction, that are due to the program. Using a theory to guide the choice of outcomes ensures the assessment of intermediate outcomes all along the hypothesized causal chain and can increase confidence that the program is generating a plausible pattern of results. For example, if no differences in intermediate outcomes are found, then it is less likely that differences will be produced in the outcome at the end of the causal chain—whether or not that outcome is actually measured. If a benefit in the outcome at the end of the chain is present but none of the intermediate benefits was produced, then one might look carefully at the results concerning the ultimate outcome to make sure that they seem plausible. With such a causal chain approach toward evaluation, programs that focus on child abuse prevention might measure changes in parent knowledge and attitudes, in parent-child relationships, in rates of emergency room visits for injuries and ingestions (which may reflect physical abuse or neglect), and in child abuse and neglect rates (both reports of suspected maltreatment and confirmed incidents). For most of the home visitation programs discussed in this journal issue, a wide range of outcomes was assessed, some of which are listed in Table 1. These usually focused on outcomes associated with child health and development, including child abuse and neglect; parenting skills, parent behavior, or parent-child interaction; and maternal life course. Many of the programs sought to assess results along the causal chain posited by their underlying models. Those models are illustrated in each of the articles.
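To make the causal-chain logic concrete, the sketch below encodes a hypothetical chain for an abuse-prevention program and applies the two plausibility rules just described: an end-of-chain benefit with no intermediate benefits deserves scrutiny, and missing intermediate benefits lower the expectation of a distal benefit. The chain, the outcome names, and the found/not-found flags are all invented for illustration.

```python
# Hypothetical causal chain for an abuse-prevention home visiting model,
# ordered from most proximal to most distal outcome. The "benefit found"
# flags are invented findings, not results from any real evaluation.
chain = [
    ("parental knowledge and attitudes", True),
    ("parent-child interaction", True),
    ("ER visits for injuries and ingestions", False),
    ("confirmed child maltreatment", True),
]

intermediate = [found for _, found in chain[:-1]]
distal_name, distal_found = chain[-1]

if distal_found and not any(intermediate):
    print(f"Scrutinize the {distal_name} result: no intermediate benefits support it.")
elif distal_found and not all(intermediate):
    print(f"A benefit in {distal_name} was found, but the causal chain is only"
          " partially supported; examine the unsupported links.")
elif not distal_found and all(intermediate):
    print(f"Intermediate benefits without a {distal_name} benefit: the chain may"
          " break at the final link, or the study may lack power to detect it.")
```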
Results vary such that some programs have fairly consistent patterns of results, and others do not. This may be due to problems in the models or the programs, but the spotty results may also be due to the use of flawed measures to assess the outcomes.

Determining How to Measure Outcomes

Outcomes can be measured in many ways. What is easiest to document in terms of time and cost (that is, knowledge and attitude changes concerning parenting, measured through the use of paper-and-pencil questionnaires) may not be the most meaningful or the most accurate measure. For example, assessing changes in knowledge or attitudes may not be as important as assessing changes in parent-child interactions. Relying on parents' self-reports of their interactions with their children or the reports of program staff may not provide as accurate a picture of those interactions as observation by an unbiased professional not associated with the program. The more precise the measurement technique, the more dispassionate the observer, and the more policy relevant the outcome, the more costly and intrusive the evaluation is likely to be. Evaluators prefer to rely on measures that have been tested to confirm that they are valid and reliable measures of the concepts they are supposed to assess for the population that is participating in the program. Across all the studies mentioned in this journal issue, more than 100 measures were used to assess a wide range of outcomes, and the studies reported in the main articles tended to use independent observers. Many of the measures are well known in the research literature. Not all have been investigated with the populations that used them in these studies.

Designing an Evaluation: Comparison Groups, Randomization, and Sample Size

Perhaps the most critical choice in planning an evaluation involves whether a comparison group is included. This choice determines the extent to which evaluators can reasonably claim that it was the program that caused observed benefits, because a comparison group allows a view of how families would have fared without any intervention. There are many ways to build a comparison group, with random assignment typically viewed as the best approach. Even if a study has a well-designed comparison group, if there are too few families enrolled in the study, real program effects may go undetected.
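As a concrete illustration of the two design choices just named, the sketch below randomly assigns a hypothetical pool of enrolled families to program and control groups, and then asks how many families per group would be needed to detect an effect of a given size. It is a minimal sketch: the effect size, significance level, and power target are conventional placeholder values, not figures from any of the evaluations in this issue, and it assumes the statsmodels package is available.

```python
import random
from statsmodels.stats.power import TTestIndPower

random.seed(42)

# Random assignment: shuffle a hypothetical pool of enrolled families and
# split it in half, so the program and control groups differ only by chance.
families = [f"family_{i}" for i in range(200)]
random.shuffle(families)
program_group = families[: len(families) // 2]
control_group = families[len(families) // 2 :]
print(len(program_group), "program families;", len(control_group), "control families")

# Sample size: families needed per group for an 80% chance of detecting a
# modest effect (0.25 standard deviations) at the conventional 5% level.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.25, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"about {n_per_group:.0f} families per group needed")
```

With these placeholder values the answer is roughly 250 families per group, which suggests why small studies can fail to detect even genuine benefits.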
